
    Active inference, evidence accumulation, and the urn task

    Deciding how much evidence to accumulate before making a decision is a problem we and other animals often face, but one that is not completely understood. This issue is particularly important because a tendency to sample less information (often known as reflection impulsivity) is a feature of several psychopathologies, such as psychosis. A formal understanding of information sampling may therefore clarify the computational anatomy of psychopathology. In this theoretical letter, we consider evidence accumulation in terms of active (Bayesian) inference using a generic model of Markov decision processes. Here, agents are equipped with beliefs about their own behavior; in this case, that they will make informed decisions. Normative decision making is then modeled using variational Bayes to minimize surprise about choice outcomes. Under this scheme, different facets of belief updating map naturally onto the functional anatomy of the brain (at least at a heuristic level). Of particular interest is the key role played by the expected precision of beliefs about control, which we have previously suggested may be encoded by dopaminergic neurons in the midbrain. We show that manipulating expected precision strongly affects how much information an agent characteristically samples, and thus provides a possible link between impulsivity and dopaminergic dysfunction. Our study therefore represents a step toward understanding evidence accumulation in terms of neurobiologically plausible Bayesian inference and may cast light on why this process is disordered in psychopathology.
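
    The role of expected precision can be illustrated with a toy accumulator (a deliberate simplification of the abstract's Markov decision process scheme; the bead likelihood ratio, the sampling cost, and the `simulate_urn_agent` helper are all assumptions for illustration). The parameter `gamma` plays the role of expected precision: it scales a softmax between committing to a decision and drawing another bead, so changing it systematically changes how much evidence the agent samples.

```python
import math
import random

def simulate_urn_agent(p_red, gamma, cost_per_draw=0.05, max_draws=20, seed=0):
    """Toy evidence accumulator for the urn task (illustrative only).

    The agent draws beads from an urn whose true red proportion is p_red and
    tracks the log odds of 'mostly red' vs 'mostly blue'. gamma scales a
    softmax between 'commit now' and 'draw again'; low gamma makes the
    commit/sample choice nearly random, high gamma makes it value-driven.
    Returns the number of beads sampled before committing.
    """
    rng = random.Random(seed)
    log_odds = 0.0                       # log P(mostly red) / P(mostly blue)
    llr = math.log(0.85 / 0.15)          # assumed per-bead likelihood ratio
    for n in range(1, max_draws + 1):
        bead_red = rng.random() < p_red
        log_odds += llr if bead_red else -llr
        p = 1.0 / (1.0 + math.exp(-log_odds))
        confidence = max(p, 1.0 - p)
        # value of committing now vs paying the cost of one more draw
        v_commit = confidence
        v_sample = confidence + 0.5 * (1.0 - confidence) - cost_per_draw
        z = math.exp(gamma * v_commit) + math.exp(gamma * v_sample)
        if rng.random() < math.exp(gamma * v_commit) / z:
            return n
    return max_draws
```

    Sweeping `gamma` while holding the urn fixed then shows how a single precision-like parameter shifts an agent's characteristic sampling behaviour, which is the qualitative point the letter makes about impulsivity.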

    Computational Phenotyping in Psychiatry: A Worked Example

    Computational psychiatry is a rapidly emerging field that uses model-based quantities to infer the behavioral and neuronal abnormalities that underlie psychopathology. If successful, this approach promises key insights into (pathological) brain function as well as a more mechanistic and quantitative approach to psychiatric nosology, structuring therapeutic interventions and predicting response and relapse. The basic procedure in computational psychiatry is to build a computational model that formalizes a behavioral or neuronal process. Measured behavioral (or neuronal) responses are then used to infer the model parameters of a single subject or a group of subjects. Here, we provide an illustrative overview of this process, starting from the modeling of choice behavior in a specific task, simulating data, and then inverting that model to estimate group effects. Finally, we illustrate cross-validation to assess whether between-subject variables (e.g., diagnosis) can be recovered successfully. Our worked example uses a simple two-step maze task and a model of choice behavior based on (active) inference and Markov decision processes. The procedural steps and routines we illustrate are not restricted to a specific field of research or particular computational model but can, in principle, be applied in many domains of computational psychiatry.
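
    The simulate-then-invert loop described above can be sketched in a few lines (a minimal sketch, not the paper's active-inference model: the softmax choice rule, the grid search, and both function names are assumptions for illustration). Choices are generated from a known inverse temperature `beta`, and model inversion recovers it by maximum likelihood.

```python
import math
import random

def simulate_choices(beta, values, n_trials, rng):
    """Simulate softmax choices between two options with fixed values."""
    p1 = 1.0 / (1.0 + math.exp(-beta * (values[1] - values[0])))
    return [1 if rng.random() < p1 else 0 for _ in range(n_trials)]

def fit_beta(choices, values, grid=None):
    """Model inversion: maximum-likelihood estimate of beta by grid search."""
    grid = grid or [0.1 * i for i in range(1, 101)]
    def loglik(beta):
        p1 = 1.0 / (1.0 + math.exp(-beta * (values[1] - values[0])))
        return sum(math.log(p1 if c == 1 else 1.0 - p1) for c in choices)
    return max(grid, key=loglik)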
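
    The simulate-then-invert loop described above can be sketched in a few lines (a minimal sketch, not the paper's active-inference model: the softmax choice rule, the grid search, and both function names are assumptions for illustration). Choices are generated from a known inverse temperature `beta`, and model inversion recovers it by maximum likelihood.

```python
import math
import random

def simulate_choices(beta, values, n_trials, rng):
    """Simulate softmax choices between two options with fixed values."""
    p1 = 1.0 / (1.0 + math.exp(-beta * (values[1] - values[0])))
    return [1 if rng.random() < p1 else 0 for _ in range(n_trials)]

def fit_beta(choices, values, grid=None):
    """Model inversion: maximum-likelihood estimate of beta by grid search."""
    grid = grid or [0.1 * i for i in range(1, 101)]
    def loglik(beta):
        p1 = 1.0 / (1.0 + math.exp(-beta * (values[1] - values[0])))
        return sum(math.log(p1 if c == 1 else 1.0 - p1) for c in choices)
    return max(grid, key=loglik)
```

    Simulating two groups with different true `beta` values and fitting each simulated subject then lets one check, as in the worked example, whether a between-group effect can be recovered from behaviour alone.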

    An Active Inference Approach to Modeling Structure Learning: Concept Learning as an Example Case

    Within computational neuroscience, the algorithmic and neural basis of structure learning remains poorly understood. Concept learning is one primary example, which requires both a type of internal model expansion process (adding novel hidden states that explain new observations), and a model reduction process (merging different states into one underlying cause and thus reducing model complexity via meta-learning). Although various algorithmic models of concept learning have been proposed within machine learning and cognitive science, many are limited to various degrees by an inability to generalize, the need for very large amounts of training data, and/or insufficiently established biological plausibility. Using concept learning as an example case, we introduce a novel approach for modeling structure learning—and specifically state-space expansion and reduction—within the active inference framework and its accompanying neural process theory. Our aim is to demonstrate its potential to facilitate a novel line of active inference research in this area. The approach we lay out is based on the idea that a generative model can be equipped with extra (hidden state or cause) “slots” that can be engaged when an agent learns about novel concepts. This can be combined with a Bayesian model reduction process, in which any concept learning—associated with these slots—can be reset in favor of a simpler model with higher model evidence. We use simulations to illustrate this model’s ability to add new concepts to its state space (with relatively few observations) and increase the granularity of the concepts it currently possesses. We also simulate the predicted neural basis of these processes. We further show that it can accomplish a simple form of “one-shot” generalization to new stimuli. Although deliberately simple, these simulation results highlight ways in which active inference could offer useful resources in developing neurocomputational models of structure learning. They provide a template for how future active inference research could apply this approach to real-world structure learning problems and assess the added utility it may offer.
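
    The trade-off driving the model reduction step, in which a simpler model wins unless the data demand extra flexibility, can be illustrated with exact Beta-Bernoulli model evidence (a sketch of the general principle only; the paper's Bayesian model reduction operates analytically on the generative model's parameters, and `log_evidence` with these particular priors is an assumed example).

```python
import math

def log_evidence(k, n, a, b):
    """Log marginal likelihood of k successes in n Bernoulli trials under a
    Beta(a, b) prior: the standard Beta-Bernoulli result B(a+k, b+n-k) / B(a, b).
    """
    def log_beta(x, y):
        return math.lgamma(x) + math.lgamma(y) - math.lgamma(x + y)
    return log_beta(a + k, b + n - k) - log_beta(a, b)
```

    A "reduced" model (a tight Beta(50, 50) prior concentrated near 0.5) has higher evidence than a flexible Beta(1, 1) model when the data are balanced, because it pays a smaller complexity cost; clearly skewed data reverse the ranking and justify keeping the more flexible model.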

    Older adults fail to form stable task representations during model-based reversal inference

    Older adults struggle in dealing with changeable and uncertain environments across several cognitive domains. This has been attributed to difficulties in forming adequate task representations that help navigate uncertain environments. Here, we investigate how, in older adults, inadequate task representations impact model-based reversal learning. We combined computational modeling and pupillometry during a novel model-based reversal learning task, which allowed us to isolate the relevance of task representations at feedback evaluation. We find that older adults overestimate the changeability of task states and consequently are less able to converge on unequivocal task representations through learning. Pupillometric measures and behavioral data show that these unreliable task representations in older adults manifest as a reduced ability to focus on feedback that is relevant for updating task representations, and as a reduced metacognitive awareness of the accuracy of their actions. Instead, the data suggested that older adults' choice behavior was more consistent with guidance by uninformative feedback properties such as outcome valence. Our study highlights that an inability to form adequate task representations may be a crucial factor underlying older adults' impaired model-based inference.
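
    The core claim, that overestimating task-state changeability prevents convergence on a stable representation, can be seen in a two-state Bayesian reversal filter (a minimal sketch, not the study's fitted model; the feedback likelihoods and hazard values are illustrative assumptions).

```python
def update_state_belief(p_state, feedback_lik, hazard):
    """One trial of reversal inference over two task states.

    p_state: current P(state 1). hazard: assumed probability that the task
    state reverses between trials. feedback_lik: (P(feedback | state 1),
    P(feedback | state 2)). Returns the posterior P(state 1).
    """
    # transition step: the states may have swapped with probability `hazard`
    prior = (1.0 - hazard) * p_state + hazard * (1.0 - p_state)
    # Bayes step: weigh the feedback evidence for each state
    l1, l2 = feedback_lik
    return prior * l1 / (prior * l1 + (1.0 - prior) * l2)
```

    With consistent feedback favouring one state, a low assumed hazard lets the posterior settle near certainty, while an overestimated hazard caps it well below, mirroring the unstable task representations described above.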

    Evidence for surprise minimization over value maximization in choice behavior

    Classical economic models are predicated on the idea that the ultimate aim of choice is to maximize utility or reward. In contrast, an alternative perspective highlights the fact that adaptive behavior requires agents to model their environment and minimize surprise about the states they frequent. We propose that choice behavior can be more accurately accounted for by surprise minimization compared to reward or utility maximization alone. Minimizing surprise makes a prediction at variance with expected utility models; namely, that in addition to attaining valuable states, agents attempt to maximize the entropy over outcomes and thus 'keep their options open'. We tested this prediction using a simple binary choice paradigm and show that human decision-making is better explained by surprise minimization compared to utility maximization. Furthermore, we replicated this entropy-seeking behavior in a control task with no explicit utilities. These findings highlight a limitation of purely economic motivations in explaining choice behavior and instead emphasize the importance of belief-based motivations.
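
    The 'keep your options open' prediction can be made concrete with a toy scoring rule (an illustrative sketch, not the paper's generative model; `option_value` and the entropy weighting are assumptions): two options with identical expected utility are separated only by the entropy of their outcome distributions.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a discrete outcome distribution."""
    return -sum(q * math.log(q) for q in probs if q > 0)

def option_value(utilities, probs, w_entropy=1.0):
    """Score an option by expected utility plus an outcome-entropy bonus.

    w_entropy = 0 recovers pure expected-utility maximization; w_entropy > 0
    adds the 'keep your options open' term described in the abstract.
    """
    eu = sum(u * q for u, q in zip(utilities, probs))
    return eu + w_entropy * entropy(probs)
```

    A deterministic option paying 1 and a 50/50 gamble over 0 and 2 have the same expected utility, so a utility maximizer is indifferent; the entropy term breaks the tie in favour of the gamble, which is the signature behaviour the study tests for.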

    A nice surprise? Predictive processing and the active pursuit of novelty

    Recent work in cognitive and computational neuroscience depicts human brains as devices that minimize prediction error signals: signals that encode the difference between actual and expected sensory stimulations. This raises a series of puzzles whose common theme concerns a potential misfit between this bedrock information-theoretic vision and familiar facts about the attractions of the unexpected. We humans often seem to actively seek out surprising events, deliberately harvesting novel and exciting streams of sensory stimulation. Conversely, we often experience some well-expected sensations as unpleasant and to-be-avoided. In this paper, I explore several core and variant forms of this puzzle, using them to display multiple interacting elements that together deliver a satisfying solution. That solution requires us to go beyond the discussion of simple information-theoretic imperatives (such as 'minimize long-term prediction error') and to recognize the essential role of species-specific prestructuring, epistemic foraging, and cultural practices in shaping the restless, curious, novelty-seeking human mind.

    Active Inference, Novelty and Neglect

    In this chapter, we provide an overview of the principles of active inference. We illustrate how different forms of short-term memory are expressed formally (mathematically) through appealing to beliefs about the causes of our sensations and about the actions we pursue. This is used to motivate an approach to active vision that depends upon inferences about the causes of 'what I have seen' and learning about 'what I would see if I were to look there'. The former could manifest as persistent 'delay-period' activity, of the sort associated with working memory, while the latter is better suited to changes in synaptic efficacy, of the sort that underlies short-term learning and adaptation. We review formulations of these ideas in terms of active inference, their role in directing visual exploration, and the consequences, for active vision, of their failures. To illustrate the latter, we draw upon some of our recent work on the computational anatomy of visual neglect.

    Hierarchically structured representations facilitate visual understanding

    Biological agents are adept at flexibly solving a wide range of cognitively challenging decision-making problems given woefully little experience. This capacity rests on one fact about the problems themselves: that there is substantial recurring structure; and two facts about us: that we can extract the structure and build internal representations of it based on the statistics of observations, and that we can use those representations when solving new tasks. Artificial agents could benefit from copying these characteristics. An important form of statistical structure is a hierarchy. We therefore investigated the formation of hierarchical representations in human subjects using a novel, sophisticated shape-composition task, in which subjects learn how composite shapes are formed from a restricted set of basic building blocks. Understanding a new shape in these terms has been shown to involve a form of internal, imagined, construction process. The task involved hierarchical structure, with certain pairs of building blocks tending to co-occur as hierarchical 'chunks'. Picking up on these chunks would facilitate the task of understanding new shapes. We found that subjects learnt and employed hierarchically structured representations when composing visual shapes. Further, we found that subjects generalised these structured representations to unseen stimuli: subjects correctly identified previously unseen shapes that contained hierarchical structure as more likely to be part of the training set than random shapes with no hierarchical structure. Moreover, when asked to complete novel shapes, subjects relied on hierarchical structure to generate solutions. Taken together, these results suggest that humans possess strong inductive biases for learning, employing, and generalising hierarchical structures in visual understanding. The computational and neural bases of these capacities are not yet clear.
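
    The basic idea of extracting chunks from co-occurrence statistics can be sketched simply (the function, threshold, and counting scheme are illustrative assumptions, not the authors' analysis): building-block pairs that recur across observed composite shapes are promoted to chunks.

```python
from collections import Counter
from itertools import combinations

def extract_chunks(shapes, min_count=3):
    """Find building-block pairs that recur often enough to act as 'chunks'.

    shapes: iterable of composite shapes, each a set of building-block labels.
    Returns the set of (sorted) pairs whose co-occurrence count across shapes
    reaches min_count.
    """
    pair_counts = Counter()
    for shape in shapes:
        for pair in combinations(sorted(shape), 2):
            pair_counts[pair] += 1
    return {pair for pair, n in pair_counts.items() if n >= min_count}
```

    A learner holding such chunks can describe a new composite shape with fewer primitive steps, which is one way recurring structure can speed up understanding of unseen stimuli.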

    Compositional Neural Representations in the Hippocampal Formation and Prefrontal Cortex Underlie Visual Construction and Planning

    The hippocampal formation is critical for spatial and relational inference in navigation problems. The neural code underlying such inference is factorized in the entorhinal cortex (EC) and conjunctive in the hippocampus (HC). A factorized code implies a separate encoding of sensory and relational knowledge, which can be flexibly conjoined into an object representation that reflects both sensory and relational properties. We hypothesize that the same neural mechanisms are employed in complex decision-making and compositional planning, which requires the flexible generalization of knowledge to novel instances. We tested this hypothesis in a task where subjects had to construct novel visual objects based on a set of basic visual building blocks and relations. We found behavioral evidence that subjects form a hierarchical representation of this task that allows them to flexibly apply compositional knowledge to novel stimuli. Using fMRI adaptation, we found evidence that the construction of novel objects depends on compositional neural representations in HC-EC and medial prefrontal cortex (mPFC). Further, we found that these structures also encoded purely relational information, indicative of a factorized representation. These results suggest that compositional neural representations in the hippocampal formation and prefrontal cortex enable the generalization of abstract knowledge to novel stimuli during visual construction.

    Evidence for entropy maximisation in human free choice behaviour

    The freedom to choose between options is strongly linked to notions of free will. Accordingly, several studies have shown that individuals demonstrate a preference for choice, or the availability of multiple options, over and above utilitarian value. Yet we lack a decision-making framework that integrates preference for choice with traditional utility maximisation in free choice behaviour. Here we test the predictions of an inference-based model of decision-making in which an agent actively seeks states yielding entropy (availability of options) in addition to utility (economic reward). We designed a study in which participants freely navigated a virtual environment consisting of two consecutive choices leading to reward locations in separate rooms. Critically, the choice of one room always led to two final doors while, in the second room, only one door could be chosen. This design allowed us to separately determine the influence of utility and entropy on participants' choice behaviour and their self-evaluation of free will. We found that choice behaviour was better predicted by an inference-based model than by expected utility alone, and that both the availability of options and the value of the context positively influenced participants' perceived freedom of choice. Moreover, this consideration of options was apparent in the ongoing motion dynamics as individuals navigated the environment. In a second study, in which participants selected between rooms that gave access to three or four doors, we observed a similar pattern of results, with participants preferring the room that gave access to more options and feeling freer in it. These results suggest that free choice behaviour is well explained by an inference-based framework in which both utility and entropy are optimised, and they support the idea that the feeling of having free will is tightly related to the availability of options.
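
    The two-room design can be mirrored by a toy value rule in which a room's worth is its economic reward plus a log-options bonus (a sketch of the inference-based idea, not the study's fitted model; both helper functions and the parameter values are assumptions).

```python
import math

def room_value(utility, n_doors, w=1.0):
    """Value of entering a room: reward plus an options bonus.

    The bonus is the entropy of a uniform distribution over the room's
    doors, log(n_doors), so a one-door room earns no bonus at all.
    """
    return utility + w * math.log(n_doors)

def choice_prob(v_a, v_b, beta=2.0):
    """Softmax probability of choosing room A over room B."""
    return 1.0 / (1.0 + math.exp(-beta * (v_a - v_b)))
```

    At equal reward, a two-door room is preferred over a one-door room purely because of the entropy bonus, which is the qualitative pattern of preference and perceived freedom the study reports.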